Distilling Intractable Generative Models
Authors
Abstract
A generative model’s partition function is typically expressed as an intractable multi-dimensional integral, whose approximation presents a challenge to numerical and Monte Carlo integration. In this work, we propose a new estimation method for intractable partition functions, based on distilling an intractable generative model into a tractable approximation thereof, and using the latter for proposing Monte Carlo samples. We empirically demonstrate that our method produces state-of-the-art estimates, even in combination with simple Monte Carlo methods.
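The estimator underlying this approach can be sketched as plain importance sampling: draw samples from a tractable distilled proposal q and average the importance weights p̃(x)/q(x), which is an unbiased estimate of the partition function Z. The toy unnormalised model, the Gaussian proposal, and all names below are illustrative assumptions chosen so the true Z is known, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an intractable unnormalised model: here the true
# partition function Z = 3 * sqrt(2*pi) is known, so the estimate can be checked.
def p_tilde(x):
    return 3.0 * np.exp(-0.5 * x**2)

# Tractable "distilled" proposal q: a Gaussian we can both sample and evaluate.
mu, sigma = 0.0, 1.5
def q_pdf(x):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Importance-sampling estimator of the partition function:
#   Z = E_q[ p_tilde(x) / q(x) ]  ~=  (1/N) * sum_i p_tilde(x_i) / q(x_i),  x_i ~ q
n = 100_000
xs = rng.normal(mu, sigma, size=n)
z_hat = np.mean(p_tilde(xs) / q_pdf(xs))

print(z_hat)  # close to the true Z = 3 * sqrt(2*pi), about 7.52
```

The quality of the estimate hinges on how well q covers the model: a proposal broader than the target keeps the importance weights bounded, whereas a too-narrow q gives high-variance estimates, which is why distilling a good tractable approximation matters.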
Similar Articles
Distilling Model Knowledge
Top-performing machine learning systems, such as deep neural networks, large ensembles and complex probabilistic graphical models, can be expensive to store, slow to evaluate and hard to integrate into larger systems. Ideally, we would like to replace such cumbersome models with simpler models that perform equally well. In this thesis, we study knowledge distillation, the idea of extracting the...
Capturing the diversity of biological tuning curves using generative adversarial networks
Tuning curves characterizing the response selectivities of biological neurons often exhibit large degrees of irregularity and diversity across neurons. Theoretical network models that feature heterogeneous cell populations or random connectivity also give rise to diverse tuning curves. However, a general framework for fitting such models to experimentally measured tuning curves is lacking. We a...
Variational Approaches for Auto-Encoding Generative Adversarial Networks
Auto-encoding generative adversarial networks (GANs) combine the standard GAN algorithm, which discriminates between real and model-generated data, with a reconstruction loss given by an auto-encoder. Such models aim to prevent mode collapse in the learned generative model by ensuring that it is grounded in all the available training data. In this paper, we develop a principle upon which autoen...
Semi-rational Models of Conditioning: The Case of Trial Order
Bayesian treatments of animal conditioning start from a generative model that precisely specifies a set of assumptions about the structure of the learning task. Optimal rules for learning are direct mathematical consequences of these assumptions. In terms of Marr’s (1982) levels of analysis, the main task at the computational level would therefore seem to be to understand and characterize the s...
Learning Deep Generative Models with Doubly Stochastic MCMC
We present doubly stochastic gradient MCMC, a simple and generic method for (approximate) Bayesian inference of deep generative models in the collapsed continuous parameter space. At each MCMC sampling step, the algorithm randomly draws a minibatch of data samples to estimate the gradient of log-posterior and further estimates the intractable expectation over latent variables via a Gibbs sample...